Humans exhibit a variety of interesting behavioral characteristics when performing tasks, such as selecting among seemingly equivalent optimal actions, performing recovery actions when deviating from the optimal trajectory, or moderating actions in response to sensed risks. However, imitation learning, which attempts to teach robots to perform these same tasks from observations of human demonstrations, often fails to capture such behavior. Specifically, commonly used learning algorithms embody inherent contradictions between their learning assumptions (e.g., a single optimal action) and actual human behavior (e.g., multiple optimal actions), thereby limiting robot generalizability, applicability, and demonstration feasibility. To address this, this paper proposes designing imitation learning algorithms around actual demonstrator behavioral characteristics, so that these characteristics can be captured and exploited. We present the first imitation learning framework, Bayesian Disturbance Injection (BDI), that typifies human behavioral characteristics by incorporating model flexibility, robustification, and risk sensitivity. Bayesian inference is used to learn flexible non-parametric multi-action policies, while simultaneously robustifying them by injecting risk-sensitive disturbances to induce human recovery actions and ensure demonstration feasibility. Our method is evaluated through risk-sensitive simulations and real-robot experiments (table-sweep, shaft-reach, and shaft-insertion tasks) using a UR5e 6-DOF robotic arm, demonstrating improved characterization of behavior. Results show significant improvements in task performance through improved flexibility, robustness, and demonstration feasibility.
Scenarios requiring humans to choose from multiple seemingly optimal actions are commonplace; however, standard imitation learning often fails to capture this behavior. Instead, an over-reliance on replicating expert actions induces inflexible and unstable policies, leading to poor generalizability in applications. To address this problem, this paper presents the first imitation learning framework that incorporates Bayesian variational inference for learning flexible non-parametric multi-action policies, while simultaneously robustifying the policies against sources of error by introducing and optimizing disturbances to create a richer demonstration dataset. This combined approach forces the policy to adapt to challenging situations, enabling stable multi-action policies to be learned efficiently. The effectiveness of the proposed method is evaluated through simulations and real-robot experiments on a table-sweep task using the UR3 6-DOF robotic arm. Results show that, through improved flexibility and robustness, learning performance and control safety are better than those of comparison methods.
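The disturbance-injection idea described in the abstract above can be sketched minimally as follows. This is an illustrative sketch, not the paper's actual algorithm: the function name, the fixed Gaussian `noise_scale`, and the assumption that the expert's recorded action approximates the recovery action for a nearby perturbed state are all simplifications (the actual method optimizes the disturbance level during Bayesian learning).

```python
import numpy as np

def inject_disturbance(states, actions, noise_scale=0.05, rng=None):
    """Augment a demonstration by injecting Gaussian disturbances into states.

    Each demonstrated state is perturbed, and the expert's recorded action
    is kept as an approximate recovery action for the perturbed state.
    The augmented dataset then covers off-trajectory situations that a
    plain behavioral-cloning dataset would miss.
    """
    rng = np.random.default_rng(rng)
    states = np.asarray(states, dtype=float)
    actions = np.asarray(actions, dtype=float)
    disturbed = states + rng.normal(0.0, noise_scale, size=states.shape)
    # Original and disturbed state-action pairs form the richer dataset.
    aug_states = np.concatenate([states, disturbed], axis=0)
    aug_actions = np.concatenate([actions, actions], axis=0)
    return aug_states, aug_actions
```

A policy trained on the augmented pairs is forced to map a neighborhood of each demonstrated state back toward the demonstrated action, which is the robustification effect the abstract refers to.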
This study proposes novel control methods that lower impact force through preemptive movement and transition smoothly to conventional contact impedance control. The proposed techniques target force-control-based robots and position/velocity-control-based robots, respectively. Strong impact forces have a negative influence on many robotic tasks. Recently, preemptive impact-reduction techniques that extend conventional contact impedance control with proximity sensors have been examined; however, a seamless transition from impact reduction to contact impedance control has not yet been accomplished. The proposed methods use a serial combined impedance control framework to solve this problem. Because the parameter design is divided into impact reduction and contact impedance control, the preemptive impact-reduction feature can be added to an already-implemented impedance controller, and no undesirable contact force arises during the transition. Furthermore, even though the preemptive impact reduction employs a crude optical proximity sensor, the influence of reflectance is mitigated using a virtual viscous force. Analyses and real-world experiments confirm these benefits.
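As a rough illustration of the two ingredients named above, the sketch below combines a standard single-axis impedance law with a proximity-gated virtual viscous force. All names, gains, and the way `proximity` is normalized (1 = far, 0 = touching) are assumptions made for illustration; the paper's serial combined framework and parameter design are more involved.

```python
def impedance_step(x, v, f_ext, proximity, dt=0.001,
                   m=1.0, d=20.0, k=100.0, d_virtual=50.0):
    """One explicit-Euler step of m*a + d*v + k*x = f_ext - f_viscous.

    A virtual viscous force, scaled by how close the proximity sensor
    says the surface is, damps the approach motion before contact and
    vanishes when the surface is far (proximity -> 1). Because it only
    adds damping, it can sit in front of an existing impedance
    controller without changing its contact-phase parameters.
    """
    f_viscous = d_virtual * (1.0 - proximity) * v  # stronger when close
    a = (f_ext - f_viscous - d * v - k * x) / m
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next
```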
Text-to-image models have recently achieved remarkable success, producing samples of seemingly photorealistic quality. However, just as state-of-the-art language models still struggle to evaluate precise statements, so do the image generation processes built on them. In this work, we demonstrate the problems state-of-the-art text-to-image models (e.g., DALL-E) have with generating accurate samples for statements related to the DrawBench benchmark. Furthermore, we show that CLIP is not able to re-rank these samples consistently. To this end, we propose LogicRank, a neuro-symbolic reasoning framework that provides a more accurate ranking system for such precision-demanding settings. LogicRank integrates smoothly into the generation process of text-to-image models and, moreover, can be used to further fine-tune towards more logically precise models.
We propose a framework that automatically transforms non-scalable GNNs into precomputation-based GNNs that are efficient and scalable for large graphs. The advantages of our framework are twofold: 1) it transforms various non-scalable GNNs so that they scale well to large graphs by separating local feature aggregation from weight learning in their graph convolutions, and 2) it efficiently executes the precomputation on GPUs by decomposing the graph's edges into small and balanced sets. Through extensive experiments on large-scale graphs, we demonstrate that the transformed GNNs train faster than existing GNNs while achieving accuracy competitive with state-of-the-art GNNs. Our transformation framework thus provides simple and effective baselines for future research on scalable GNNs.
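The core separation described above, parameter-free neighbor aggregation done once up front, weight learning done afterwards, can be sketched as follows. This is a generic illustration in the style of precomputation-based GNNs (names and the simple row-normalization are assumptions, not the paper's implementation):

```python
import numpy as np

def precompute_features(adj, x, num_hops=2):
    """Precompute multi-hop aggregated features [X, AX, A^2 X, ...] once.

    Because aggregation has no learnable parameters, it can be run a
    single time (e.g., on GPU) and the result fed to any mini-batch
    MLP trainer, independent of the graph's size.
    """
    deg = adj.sum(axis=1, keepdims=True)
    a_norm = adj / np.maximum(deg, 1.0)  # row-normalize: average neighbors
    feats = [x]
    h = x
    for _ in range(num_hops):
        h = a_norm @ h
        feats.append(h)
    return np.concatenate(feats, axis=1)
```

After this step, training reduces to fitting a standard classifier on fixed node features, which is what makes the transformed models scalable.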
Applying differentially private stochastic gradient descent (DP-SGD) to the training of modern, large-scale neural networks such as transformer-based models is a challenging task, as the magnitude of the noise added at each iteration scales with the model dimension, significantly hindering learning capability. We propose a unified framework, $\textsf{LSG}$, that fully exploits the low-rank and sparse structure of neural networks to reduce the dimension of gradient updates, thereby alleviating the negative impact of DP-SGD. The gradient updates are first approximated with a pair of low-rank matrices. Then, a novel strategy is used to sparsify the gradients, resulting in low-dimensional, less noisy updates that still preserve the performance of the neural networks. Empirical evaluations on natural language processing and computer vision tasks show that our method outperforms other state-of-the-art baselines.
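A minimal sketch of the two-stage idea above (low-rank approximation of a gradient matrix, then magnitude-based sparsification of the residual) is shown below. The function name, the SVD-based factorization, and the `keep_ratio` threshold are illustrative assumptions; the actual framework's strategy and its interaction with DP noise calibration are not reproduced here.

```python
import numpy as np

def compress_gradient(grad, rank=4, keep_ratio=0.1):
    """Approximate a gradient matrix as low-rank + sparse.

    The low-rank factors capture the bulk of the update in few
    dimensions; the residual is sparsified by keeping only its
    largest-magnitude entries. DP noise then needs to privatize far
    fewer effective coordinates than the full dense gradient.
    """
    u, s, vt = np.linalg.svd(grad, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
    residual = grad - low_rank
    k = max(1, int(keep_ratio * residual.size))
    thresh = np.partition(np.abs(residual).ravel(), -k)[-k]
    sparse = np.where(np.abs(residual) >= thresh, residual, 0.0)
    return low_rank + sparse
```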
Federated learning is a distributed machine learning approach in which a single server and multiple clients collaboratively build machine learning models without sharing the clients' datasets. A challenging issue in federated learning is data heterogeneity (i.e., data distributions may differ across clients). To cope with this issue, numerous federated learning methods aim at personalized federated learning, building models optimized for individual clients. Although existing studies have empirically evaluated their own methods, the experimental settings in these studies (e.g., comparison methods, datasets, and client configurations) differ from one another, and it remains unclear which personalized federated learning method performs best, or how much progress such methods achieve over standard (i.e., non-personalized) federated learning. In this paper, we benchmark the performance of existing personalized federated learning methods through comprehensive experiments to evaluate the characteristics of each method. Our experimental study shows that (1) there is no champion method, (2) large data heterogeneity often leads to highly accurate predictions, and (3) standard federated learning methods (e.g., FedAvg) with fine-tuning often outperform personalized federated learning methods. We release our benchmark tool, FedBench, for researchers to conduct experimental studies under various settings.
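For readers unfamiliar with the FedAvg baseline mentioned in finding (3), its aggregation step is just a dataset-size-weighted average of client model weights; "fine-tuning" then means each client continues training the averaged model locally. A minimal sketch (list-of-arrays representation is an illustrative choice):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client models by dataset-size-weighted averaging.

    client_weights: one list of layer arrays per client.
    client_sizes:   number of local samples per client, used as weights.
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(acc)
    return averaged
```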
Nowadays, multiple smart city initiatives are being carried out around the world to improve services and the livability of urban areas. SmartSantander is a smart city project in Santander, Spain, which relies on wireless sensor network technology to deploy heterogeneous sensors within the city to measure multiple parameters, including outdoor parking information. In this paper, we study the prediction of parking slot availability using historical data from more than 300 outdoor parking sensors of SmartSantander. We design a graph-based model to capture the periodical fluctuations and geographical positions of parking lots. To develop and evaluate our model, we use a 3-year parking availability dataset from the city of Santander. Our model achieves high accuracy compared with existing sequence-to-sequence models, accurate enough to provide a parking information service in the city. We apply our model to a smartphone application intended for wide use by citizens and tourists.
Graph neural networks (GNNs) have achieved great success on node classification tasks. Despite broad interest in developing and evaluating GNNs, they have been assessed on a limited set of benchmark datasets. As a result, existing evaluations of GNNs lack fine-grained analysis across the various characteristics of graphs. Motivated by this, we conduct extensive experiments with a synthetic graph generator that can produce graphs with controlled characteristics for fine-grained analysis. Our empirical studies clarify the strengths and weaknesses of GNNs with respect to four major characteristics of real-world graphs with node class labels: 1) class-size distribution (balanced vs. imbalanced), 2) edge connection proportions between classes (homophilic vs. heterophilic), 3) attribute values (biased vs. random), and 4) graph size (small vs. large). In addition, to foster future research on GNNs, we publicly release our codebase, which allows users to evaluate various GNNs on various graphs. We hope this work provides interesting insights for future research.
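Two of the controlled characteristics above, class-size balance and homophily, can be illustrated with a tiny stochastic-block-model-style generator. This sketch is not the paper's generator; the function name and the two edge probabilities `p_in`/`p_out` are illustrative assumptions:

```python
import numpy as np

def synthetic_graph(sizes, p_in, p_out, rng=None):
    """Generate a random undirected graph with controlled homophily.

    p_in > p_out yields a homophilic graph (edges mostly within a class);
    p_in < p_out yields a heterophilic one. Unequal `sizes` produce an
    imbalanced class-size distribution. Returns (adjacency, labels).
    """
    rng = np.random.default_rng(rng)
    labels = np.concatenate([np.full(s, c) for c, s in enumerate(sizes)])
    n = labels.size
    same_class = labels[:, None] == labels[None, :]
    probs = np.where(same_class, p_in, p_out)
    # Sample the upper triangle, then mirror it for an undirected graph.
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    adj = (upper | upper.T).astype(int)
    return adj, labels
```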
Symmetry is omnipresent in nature and is perceived by the visual systems of many species, as it facilitates detecting ecologically important classes of objects in our environment. Symmetry perception requires abstracting non-local spatial dependencies between image regions, and its underlying neural mechanisms remain elusive. In this paper, we evaluate deep neural network (DNN) architectures on the task of learning symmetry perception from examples. We demonstrate that feed-forward DNNs that excel at modelling human performance on object recognition tasks fail to acquire a general notion of symmetry. This is the case even when the DNNs are designed to capture non-local spatial dependencies, such as through "dilated" convolutions and the recently introduced "transformer" architectures. By contrast, we find that recurrent architectures are capable of learning symmetry by decomposing the non-local spatial dependencies into sequences of local operations that are reusable for novel images. These results suggest that recurrent connections likely play an important role in symmetry perception in artificial systems, and possibly in biological ones as well.